
    Correction: The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    Notice of republication: This article was republished on April 23, 2015, to correct errors in the equations that were introduced during the typesetting process. The publisher apologizes for the errors. Please download this article again to view the correct version.

    Efficient inference for time-varying behavior during learning

    The process of learning new behaviors over time is a problem of great interest in both neuroscience and artificial intelligence. However, most standard analyses of animal training data either treat behavior as fixed or track only coarse performance statistics (e.g., accuracy, bias), providing limited insight into the evolution of the policies governing behavior. To overcome these limitations, we propose a dynamic psychophysical model that efficiently tracks trial-to-trial changes in behavior over the course of training. Our model consists of a dynamic logistic regression model, parametrized by a set of time-varying weights that express dependence on sensory stimuli as well as task-irrelevant covariates, such as stimulus, choice, and answer history. Our implementation scales to large behavioral datasets, allowing us to infer 500K parameters (e.g., 10 weights over 50K trials) in minutes on a desktop computer. We optimize hyperparameters governing how rapidly each weight evolves over time using the decoupled Laplace approximation, an efficient method for maximizing marginal likelihood in non-conjugate models. To illustrate performance, we apply our method to psychophysical data from both rats and human subjects learning a delayed sensory discrimination task. The model successfully tracks the psychophysical weights of rats over the course of training, capturing day-to-day and trial-to-trial fluctuations that underlie changes in performance, choice bias, and dependencies on task history. Finally, we investigate why rats frequently make mistakes on easy trials, and suggest that apparent lapses can be explained by sub-optimal weighting of known task covariates.
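
    A minimal generative sketch may help make the model concrete: each psychophysical weight follows a Gaussian random walk across trials, and each choice is Bernoulli under a logistic link. All sizes and parameter values below are illustrative, not taken from the paper.

```python
# Minimal sketch of the dynamic logistic regression model: weights follow
# a Gaussian random walk and choices are Bernoulli with a logistic link.
# Sizes and the step-size hyperparameter are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_weights = 500, 3     # toy sizes; the paper scales to ~50K trials
sigma = 0.05                     # random-walk step size (a hyperparameter)

# Time-varying weights: w_t = w_{t-1} + N(0, sigma^2), per weight
w = np.cumsum(sigma * rng.standard_normal((n_trials, n_weights)), axis=0)

# Design matrix: sensory stimulus plus task-irrelevant covariates
X = rng.standard_normal((n_trials, n_weights))

# Logistic choice probabilities and simulated binary choices
p = 1.0 / (1.0 + np.exp(-(X * w).sum(axis=1)))
y = (rng.random(n_trials) < p).astype(int)
```

    Inference then amounts to recovering w from (X, y), with sigma set by maximizing the marginal likelihood, which the paper does via the decoupled Laplace approximation.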

    Extracting the dynamics of behavior in sensory decision-making experiments

    Decision-making strategies evolve during training and can continue to vary even in well-trained animals. However, studies of sensory decision-making tend to characterize behavior in terms of a fixed psychometric function that is fit only after training is complete. Here, we present PsyTrack, a flexible method for inferring the trajectory of sensory decision-making strategies from choice data. We apply PsyTrack to training data from mice, rats, and human subjects learning to perform auditory and visual decision-making tasks. We show that it successfully captures trial-to-trial fluctuations in the weighting of sensory stimuli, bias, and task-irrelevant covariates such as choice and stimulus history. This analysis reveals dramatic differences in learning across mice and rapid adaptation to changes in task statistics. PsyTrack scales easily to large datasets and offers a powerful tool for quantifying time-varying behavior in a wide variety of animals and tasks.
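
    As a rough illustration of the core computation, the sketch below recovers a weight trajectory by MAP estimation under a Gaussian random-walk prior, using generic numerical optimization with a fixed smoothness hyperparameter. The released PsyTrack package implements the full method, including hyperparameter optimization; nothing below reflects its actual API.

```python
# Hedged sketch of trajectory inference: MAP estimate of time-varying
# logistic regression weights under a random-walk prior. Sigma is fixed
# here; the real method learns it from the data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, K, sigma = 200, 2, 0.1
X = rng.standard_normal((T, K))                        # stimuli / covariates
w_true = np.cumsum(sigma * rng.standard_normal((T, K)), axis=0)
y = (rng.random(T) < 1 / (1 + np.exp(-(X * w_true).sum(1)))).astype(float)

def neg_log_post(wflat):
    w = wflat.reshape(T, K)
    z = (X * w).sum(axis=1)
    # Bernoulli log-likelihood, numerically stable via logaddexp
    ll = np.sum(y * z - np.logaddexp(0.0, z))
    # Random-walk prior penalizes trial-to-trial weight changes
    lp = -np.sum(np.diff(w, axis=0) ** 2) / (2 * sigma ** 2)
    return -(ll + lp)

res = minimize(neg_log_post, np.zeros(T * K), method="L-BFGS-B")
w_map = res.x.reshape(T, K)
print("correlation with true trajectory:",
      np.corrcoef(w_map.ravel(), w_true.ravel())[0, 1])
```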

    Goodness-of-fit tests for neural population models: the multivariate time-rescaling theorem

    Poster presentation from the Nineteenth Annual Computational Neuroscience Meeting: CNS*2010, San Antonio, TX, USA, 24-30 July 2010. Statistical models of neural activity are at the core of modern computational neuroscience. The activity of single neurons has been modeled to successfully explain the dependence of neural dynamics on the neuron's own spiking history, on external stimuli, and on other covariates [1]. Recently, there has been growing interest in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing (existing models include generalized linear models [2,3] and maximum-entropy approaches [4]). For point-process models of single neurons, the time-rescaling theorem has proven to be a useful tool for assessing goodness-of-fit. In its univariate form, the time-rescaling theorem states that if the conditional intensity function of a point process is known, then its inter-spike intervals can be transformed, or “rescaled”, so that they are independent and exponentially distributed [5]. However, the theorem in its original form lacks the sensitivity to detect even strong dependencies between neurons. Here, we show how the theorem can be extended to neural population models, and we provide a step-by-step procedure for performing the statistical tests. We apply both the univariate and multivariate tests to simplified toy models, as well as to more complicated many-neuron models and to neuronal populations recorded in V1 of an awake monkey during natural-scene stimulation. We demonstrate that important features of the population activity can be detected only with the multivariate extension of the test.
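
    For intuition, here is a sketch of the univariate version of the test that the poster generalizes: simulate an inhomogeneous Poisson neuron, rescale its inter-spike intervals by the integrated conditional intensity, and check the result against an Exp(1) distribution. The intensity function and simulation parameters are invented for illustration.

```python
# Univariate time-rescaling test: if the model intensity is correct, the
# rescaled inter-spike intervals are i.i.d. Exp(1), which a KS test checks.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(2)
dt, T = 1e-3, 200.0
t = np.arange(0, T, dt)
lam = 5.0 + 4.0 * np.sin(2 * np.pi * 0.5 * t)   # conditional intensity (Hz)

# Simulate an inhomogeneous Poisson process by per-bin thinning
spikes = t[rng.random(t.size) < lam * dt]

# Rescaled ISIs: z_k = integral of lambda over each inter-spike interval
Lam = np.cumsum(lam) * dt                        # integrated intensity
z = np.diff(np.interp(spikes, t, Lam))

# With the true intensity, the KS test should not reject Exp(1)
print(kstest(z, "expon"))
```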

    Incremental Mutual Information: A New Method for Characterizing the Strength and Dynamics of Connections in Neuronal Circuits

    Understanding the computations performed by neuronal circuits requires characterizing the strength and dynamics of the connections between individual neurons. This characterization is typically achieved by measuring the correlation in the activity of two neurons. We have developed a new measure for studying connectivity in neuronal circuits based on information theory, the incremental mutual information (IMI). By conditioning out the temporal dependencies in the responses of individual neurons before measuring the dependency between them, IMI improves on standard correlation-based measures in several important ways: 1) it has the potential to disambiguate statistical dependencies that reflect the connection between neurons from those caused by other sources (e.g., shared inputs or intrinsic cellular or network mechanisms), provided that the dependencies have appropriate timescales, 2) for the study of early sensory systems, it does not require responses to repeated trials of identical stimulation, and 3) it does not assume that the connection between neurons is linear. We describe the theory and implementation of IMI in detail and demonstrate its utility on experimental recordings from the primate visual system.
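
    The following toy plug-in estimator conveys the flavor of IMI: it measures the mutual information between two binary spike trains after conditioning on one bin of each train's own past. The real measure conditions on longer response histories and is evaluated across lags; the simulation parameters here are invented.

```python
# Toy IMI-style quantity: I(x_t ; y_t | x_{t-1}, y_{t-1}) for binary
# spike trains, via a plug-in estimate of the conditional mutual information.
import numpy as np

def cond_mi(x, y):
    """Conditional MI in bits between two binary sequences, conditioning
    on one bin of each sequence's own past."""
    # Encode each time step as a joint symbol (x_t, y_t, x_{t-1}, y_{t-1})
    s = x[1:] + 2 * y[1:] + 4 * x[:-1] + 8 * y[:-1]
    p = np.bincount(s.astype(int), minlength=16) / (len(x) - 1)
    p = p.reshape(2, 2, 2, 2)                  # axes: y_{t-1}, x_{t-1}, y_t, x_t
    p_c  = p.sum(axis=(2, 3), keepdims=True)   # p(past)
    p_xc = p.sum(axis=2, keepdims=True)        # p(past, x_t)
    p_yc = p.sum(axis=3, keepdims=True)        # p(past, y_t)
    with np.errstate(divide="ignore", invalid="ignore"):
        term = p * np.log2(p * p_c / (p_xc * p_yc))
    return np.nansum(term)                     # 0 log 0 terms drop out

rng = np.random.default_rng(3)
x = (rng.random(100_000) < 0.2).astype(int)
y = (rng.random(x.size) < 0.1 + 0.5 * x).astype(int)   # y driven by x
print(cond_mi(x, y), cond_mi(x, rng.permutation(y)))    # coupled vs. shuffled
```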

    Receptive Field Inference with Localized Priors

    The linear receptive field describes a mapping from sensory stimuli to a one-dimensional variable governing a neuron's spike response. However, traditional receptive field estimators such as the spike-triggered average converge slowly and often require large amounts of data. Bayesian methods seek to overcome this problem by biasing estimates towards solutions that are more likely a priori, typically those with small, smooth, or sparse coefficients. Here we introduce a novel Bayesian receptive field estimator designed to incorporate locality, a powerful form of prior information about receptive field structure. The key to our approach is a hierarchical receptive field model that flexibly adapts to localized structure in both space-time and spatiotemporal frequency, using an inference method known as empirical Bayes. We refer to our method as automatic locality determination (ALD), and show that it can accurately recover various types of smooth, sparse, and localized receptive fields. We apply ALD to neural data from retinal ganglion cells and V1 simple cells, and find it achieves error rates several times lower than standard estimators. Thus, estimates of comparable accuracy can be achieved with substantially less data. Finally, we introduce a computationally efficient Markov chain Monte Carlo (MCMC) algorithm for fully Bayesian inference under the ALD prior, yielding accurate Bayesian confidence intervals for small or noisy datasets.
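
    To illustrate the role of a localized prior, the sketch below computes the Gaussian posterior mean for a simulated linear neuron under a prior whose variance decays away from an assumed receptive-field center. ALD itself additionally models locality in spatiotemporal frequency and learns the envelope parameters by evidence optimization; all values here are illustrative.

```python
# Posterior mean for a linear-Gaussian neuron under a localized prior:
# coefficients far from the assumed RF center are shrunk toward zero.
import numpy as np

rng = np.random.default_rng(4)
D, N, noise = 32, 400, 0.5                      # RF pixels, samples, noise std
pos = np.arange(D)
w_true = np.exp(-0.5 * ((pos - 16) / 2.0) ** 2) * np.sin(pos / 2.0)

X = rng.standard_normal((N, D))                 # white-noise stimuli
y = X @ w_true + noise * rng.standard_normal(N)

# Localized prior: variance decays with distance from an assumed center mu
mu, tau = 16.0, 4.0                             # envelope center and width
prior_var = np.exp(-0.5 * ((pos - mu) / tau) ** 2) + 1e-6

# Posterior mean under the Gaussian linear model (locally weighted ridge)
A = X.T @ X / noise**2 + np.diag(1.0 / prior_var)
w_post = np.linalg.solve(A, X.T @ y / noise**2)
print("MSE with localized prior:", np.mean((w_post - w_true) ** 2))
```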

    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate versus current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. When the underlying system is fixed, we derive relationships between the change in gain with respect to both mean and variance and the receptive fields obtained by reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.
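
    The "empirical linear/nonlinear model obtained by sampling" can be illustrated directly: stimulate a toy LN neuron with white noise, recover the filter by spike-triggered averaging, and read off the gain curve as the spike probability versus filter output. Rerunning with a different stimulus variance shows how the sampled gain curve shifts; the filter and nonlinearity below are invented.

```python
# Sampling an empirical LN model: spike-triggered average for the filter,
# binned spike probabilities for the gain curve.
import numpy as np

rng = np.random.default_rng(5)
L, N = 20, 200_000
tt = np.arange(L)
k = np.exp(-tt / 5.0) * np.sin(tt / 2.0)
k /= np.linalg.norm(k)                       # true linear filter, unit norm

s = rng.standard_normal(N)                   # unit-variance white-noise stimulus
g = np.convolve(s, k)[:N]                    # filter output at each time bin
spikes = rng.random(N) < 1 / (1 + np.exp(-(4 * g - 2)))  # sigmoidal gain curve

# Spike-triggered average: mean stimulus window preceding each spike
times = np.flatnonzero(spikes)
times = times[times >= L - 1]
sta = np.mean([s[t - L + 1:t + 1][::-1] for t in times], axis=0)
sta /= np.linalg.norm(sta)
print("filter recovery (cosine similarity):", sta @ k)

# Empirical gain curve: spike probability as a function of filter output
edges = np.linspace(-3, 3, 13)
bin_ix = np.digitize(g, edges)
gain = [spikes[bin_ix == i].mean() for i in range(1, len(edges))]
```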

    Fast, scalable, Bayesian spike identification for multi-electrode arrays

    We present an algorithm to identify individual neural spikes observed on high-density multi-electrode arrays (MEAs). Our method can distinguish large numbers of distinct neural units, even when spikes overlap, and accounts for intrinsic variability of spikes from each unit. As MEAs grow larger, it is important to find spike-identification methods that are scalable, that is, whose computational cost scales well with the number of units observed. Our algorithm accomplishes this goal, and is fast, because it exploits the spatial locality of each unit and the basic biophysics of extracellular signal propagation. Human intervention is minimized and streamlined via a graphical interface. We illustrate our method on data from a mammalian retina preparation and document its performance on simulated data consisting of spikes added to experimentally measured background noise. The algorithm is highly accurate.
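
    Below is a heavily simplified sketch of resolving overlapping spikes by greedy template subtraction (matching pursuit). The published algorithm is a Bayesian method that additionally exploits the spatial locality of each unit on the array and models per-unit spike variability, none of which is captured here; templates, noise level, and thresholds are invented.

```python
# Greedy template subtraction: repeatedly find the (unit, time) pair whose
# template best reduces the residual energy, subtract it, and stop when no
# subtraction helps. Overlapping spikes are resolved across passes.
import numpy as np

rng = np.random.default_rng(6)
L = 30
tt = np.arange(L)
# Three invented single-unit spike templates
templates = np.stack([np.exp(-tt / 4.0) * np.sin(tt / (u + 2.0)) for u in range(3)])
energies = (templates ** 2).sum(axis=1)

# Noisy trace containing two overlapping spikes from different units
trace = 0.05 * rng.standard_normal(500)
trace[100:100 + L] += templates[0]
trace[110:110 + L] += templates[2]

found, resid = [], trace.copy()
for _ in range(10):                                # greedy passes
    # Squared-error reduction from subtracting template u at time t
    scores = np.stack([np.correlate(resid, tpl, mode="valid") for tpl in templates])
    reduction = 2 * scores - energies[:, None]
    u, t = np.unravel_index(np.argmax(reduction), reduction.shape)
    if reduction[u, t] <= 0:                       # nothing left to explain
        break
    resid[t:t + L] -= templates[u]
    found.append((int(u), int(t)))
print(found)
```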

    An International Laboratory for Systems and Computational Neuroscience

    The neural basis of decision-making has been elusive and involves the coordinated activity of multiple brain structures. This NeuroView, by the International Brain Laboratory (IBL), discusses their efforts to develop a standardized mouse decision-making behavior, to make coordinated measurements of neural activity across the mouse brain, and to use theory and analyses to uncover the neural computations that support decision-making.

    Of Toasters and Molecular Ticker Tapes

    Experiments in systems neuroscience can be seen as consisting of three steps: (1) selecting the signals we are interested in, (2) probing the system with carefully chosen stimuli, and (3) getting data out of the brain. Here I discuss how emerging techniques in molecular biology are starting to improve these three steps. To estimate their future impact on experimental neuroscience, I stress the analogy between this ongoing progress and that of microprocessor production techniques. These techniques have allowed computers to simplify countless problems; because they are easier to use than mechanical timers, they are even built into toasters. Molecular biology may advance even faster than computer speeds and has made immense progress in understanding and designing molecules. These advancements may in turn produce impressive improvements to each of the three steps, ultimately shifting the bottleneck from obtaining data to interpreting it.